Hamilton-Jacobi-Bellman Equations and the Optimal Control of Stochastic Systems

Abstract

In many applications (engineering, management, economics) one is led to control problems for stochastic systems: more precisely, the state of the system is assumed to be described by the solution of stochastic differential equations, and the control enters the coefficients of the equation. Using the dynamic programming principle, R. Bellman [6] explained why, at least heuristically, the optimal cost function (or value function) should satisfy a certain partial differential equation called the Hamilton-Jacobi-Bellman equation (HJB for short), which is of the following form:
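For reference, the stationary form usually associated with this setting (the notation below is the conventional one for controlled diffusions and is assumed here, not quoted from the paper) reads

\[
\sup_{\alpha \in A}\Big\{ -a_{ij}(x,\alpha)\,\partial^{2}_{ij} u(x) - b_i(x,\alpha)\,\partial_i u(x) + c(x,\alpha)\,u(x) - f(x,\alpha) \Big\} = 0 \quad \text{in } \mathcal{O},
\]

with summation over repeated indices, where \(A\) is the set of control values, \((a_{ij}(\cdot,\alpha)) = \tfrac12\,\sigma\sigma^{T}(\cdot,\alpha)\) comes from the diffusion coefficient, \(b\) is the drift, \(c\) a discount term, and \(f\) the running cost; the supremum over controls is what makes the equation fully nonlinear.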


Related works

A New Near Optimal High Gain Controller For The Non-Minimum Phase Affine Nonlinear Systems

In this paper, a new analytical method to find a near-optimal high-gain controller for non-minimum-phase affine nonlinear systems is introduced. This controller is derived based on the closed-form solution of the Hamilton-Jacobi-Bellman (HJB) equation associated with the cheap control problem. This methodology employs an algebraic equation with parametric coefficients for the systems with s...
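As a sketch of the underlying setting (this is the standard cheap-control formulation; the symbols below are the usual ones and are assumptions, not taken from this abstract): for an affine nonlinear system

\[
\dot{x} = f(x) + g(x)\,u, \qquad y = h(x),
\]

the cheap control problem penalizes the control only weakly,

\[
J_{\varepsilon}(u) = \frac12 \int_{0}^{\infty} \big( y^{T} y + \varepsilon^{2}\, u^{T} u \big)\, dt, \qquad \varepsilon \to 0,
\]

and the associated HJB equation for the value function \(V_{\varepsilon}\) becomes

\[
\nabla V_{\varepsilon}^{T} f(x) + \tfrac12\, h^{T}(x)\,h(x) - \frac{1}{2\varepsilon^{2}}\, \nabla V_{\varepsilon}^{T} g(x)\, g^{T}(x)\, \nabla V_{\varepsilon} = 0,
\]

obtained by substituting the minimizing control \(u^{*} = -\varepsilon^{-2} g^{T}(x)\,\nabla V_{\varepsilon}\).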


Second Order Hamilton-Jacobi Equations in Hilbert Spaces and Stochastic Boundary Control

The paper is concerned with fully nonlinear second-order Hamilton-Jacobi-Bellman-Isaacs equations of elliptic type in separable Hilbert spaces which have unbounded first- and second-order terms. The viscosity solution approach is adapted to the equations under consideration and the existence and uniqueness of viscosity solutions is proved. A stochastic optimal control problem driven by a paraboli...
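Schematically (the operators below are generic placeholders for the unbounded terms the abstract mentions, not the precise assumptions of the paper), such an elliptic equation on a separable Hilbert space \(X\) has the shape

\[
\lambda u(x) - \langle Ax, Du(x) \rangle + H\big(x, Du(x), D^{2}u(x)\big) = 0, \qquad x \in X,
\]

with an Isaacs-type Hamiltonian

\[
H(x,p,P) = \sup_{a}\,\inf_{b}\Big\{ -\tfrac12 \operatorname{Tr}\big(\sigma\sigma^{*}(x,a,b)\,P\big) - \langle f(x,a,b), p \rangle - \ell(x,a,b) \Big\},
\]

where \(A\) is an unbounded linear operator responsible for the unbounded first-order term.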


Hamilton-Jacobi-Bellman equations for Quantum Optimal Feedback Control

We exploit the separation of the filtering and control aspects of quantum feedback control to consider the optimal control as a classical stochastic problem on the space of quantum states. We derive the corresponding Hamilton-Jacobi-Bellman equations using the elementary arguments of classical control theory and show that this is equivalent, in the Stratonovich calculus, to a stochastic Hamilton...


Hamilton-Jacobi-Bellman equations for Quantum Filtering and Control

We exploit the separation of the filtering and control aspects of quantum feedback control to consider the optimal control as a classical stochastic problem on the space of quantum states. We derive the corresponding Hamilton-Jacobi-Bellman equations using the elementary arguments of classical control theory and show that this is equivalent, in the Stratonovich calculus, to a stochastic Hamilto...


Stochastic Optimal Control of Delay Equations Arising in Advertising Models

We consider a class of optimal control problems of stochastic delay differential equations (SDDE) that arise in connection with optimal advertising under uncertainty for the introduction of a new product to the market, generalizing classical work of Nerlove and Arrow [30]. In particular, we deal with controlled SDDE where the delay enters both the state and the control. Following ideas of Vinte...
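A minimal sketch of such a controlled dynamic (illustrative only; the linear coefficients and the single discrete delay \(\tau\) are assumptions, not the model of the paper) is

\[
dX(t) = \big( a_{0}\,X(t) + a_{1}\,X(t-\tau) + b_{0}\,u(t) + b_{1}\,u(t-\tau) \big)\, dt + \sigma\, dW(t),
\]

where the terms \(X(t-\tau)\) and \(u(t-\tau)\) show the delay entering the state and the control, respectively; the objective is then to maximize a discounted profit functional over admissible advertising rates \(u\).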



Journal:

Volume   Issue

Pages  -

Publication date: 2010